Leveraging Learners by Gradient

Author

  • David Helmbold
Abstract

Recent interpretations of the AdaBoost algorithm view it as performing gradient descent on a potential function. Simply changing the potential function allows one to create new algorithms related to AdaBoost. However, these new algorithms are generally not known to have the formal boosting property. This paper examines the question of which potential functions lead to new algorithms that are boosters. The two main results are general sets of conditions on the potential: one set implies that the resulting algorithm is a booster, while the other implies that the algorithm is not. These conditions are applied to previously studied potential functions, such as those used by LogitBoost and DOOM II.
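The gradient-descent view is concrete enough to sketch in code. Below is a minimal, illustrative leveraging loop in Python: the distribution over training examples at each round is the normalized negative gradient of the potential evaluated at the current margins, so swapping the potential (exponential for AdaBoost, logistic for a LogitBoost-style variant) changes the algorithm while the loop stays the same. The names (`leverage`, `stump_learner`) and the fixed step size are illustrative choices, not the paper's notation.

```python
import numpy as np

# Derivative of the potential phi(margin); AdaBoost corresponds to
# phi(m) = exp(-m), the logistic potential to phi(m) = log(1 + exp(-m)).
GRADIENTS = {
    "exp":   lambda m: -np.exp(-m),
    "logit": lambda m: -1.0 / (1.0 + np.exp(m)),
}

def stump_learner(X, y, w):
    """Toy weak learner: the best single-feature threshold stump
    under the weighted 0/1 error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                pred = np.where(X[:, j] > t, s, -s)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, t, s)
    j, t, s = best
    return lambda Xq: np.where(Xq[:, j] > t, s, -s)

def leverage(X, y, potential="exp", rounds=50, step=0.1):
    """Gradient-descent leveraging: reweight examples by the negative
    gradient of the potential at the current margins, then add the
    weak hypothesis fit to that distribution."""
    margins = np.zeros(len(y))
    ensemble = []
    for _ in range(rounds):
        w = -GRADIENTS[potential](margins)   # steepest-descent direction...
        w /= w.sum()                         # ...normalized to a distribution
        h = stump_learner(X, y, w)
        ensemble.append(h)
        margins += step * y * h(X)           # fixed step for simplicity
    return lambda Xq: np.sign(sum(h(Xq) for h in ensemble))
```

AdaBoost additionally chooses each round's step by a line search; the fixed step above keeps the sketch short without changing the gradient-descent structure.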


Similar Articles

Leveraging for Regression

In this paper we examine master regression algorithms that leverage base regressors by iteratively calling them on modified samples. The most successful leveraging algorithm for classification is AdaBoost, an algorithm that requires only modest assumptions on the base learning method for its good theoretical bounds. We present three gradient descent leveraging algorithms for regression and prov...
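In the regression setting, the same loop reads naturally as fitting the base regressor to a modified sample each round. The sketch below is a generic gradient-descent leveraging loop under the squared loss (so the modified targets are the current residuals); it illustrates the idea rather than any of the paper's three specific algorithms, and `base_regressor` is an assumed callable.

```python
import numpy as np

def leverage_regression(X, y, base_regressor, rounds=100, step=0.1):
    """Generic gradient-descent leveraging for regression: each round the
    base regressor is called on a modified sample whose targets are the
    negative gradient of the squared loss, i.e. the current residuals."""
    pred = np.zeros(len(y))
    ensemble = []
    for _ in range(rounds):
        residuals = y - pred               # -d/d(pred) of 0.5 * (y - pred)^2
        h = base_regressor(X, residuals)   # base learner sees modified targets
        ensemble.append(h)
        pred += step * h(X)
    return lambda Xq: step * sum(h(Xq) for h in ensemble)
```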


A Geometric Approach to Leveraging

AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is the direction of steepest descent. We introduce a new leverag...
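The constrained-descent view fits in a few lines: the distribution handed to the weak learner is the negative gradient of the potential, renormalized onto the probability simplex, and the weak learner's job is to return a hypothesis whose predictions correlate with the labels under that distribution (its "edge"). A minimal sketch, with hypothetical helper names:

```python
import numpy as np

def descent_distribution(margins, grad):
    """Distribution given to the weak learner: the normalized negative
    gradient of the potential at the current margins, i.e. the steepest
    descent direction constrained to the probability simplex."""
    d = -grad(margins)
    return d / d.sum()

def edge(h_preds, y, dist):
    """Weighted correlation between a hypothesis and the labels; the
    weak learner should return a hypothesis with a large edge."""
    return np.sum(dist * y * h_preds)
```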


Inefficiency of Stochastic Gradient Descent with Larger Mini-Batches (and More Learners)

Stochastic Gradient Descent (SGD) and its variants are the most important optimization algorithms used in large-scale machine learning. The mini-batch version of stochastic gradient descent is often used in practice to take advantage of hardware parallelism. In this work, we analyze the effect of mini-batch size on SGD convergence for the case of general non-convex objective functions. Building on the...
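For reference, a plain mini-batch SGD loop looks like the sketch below (generic, not the paper's analysis); `grad_fn`, the batch size, and the learning rate are illustrative. Larger batches average away gradient noise and expose parallelism, but each epoch then takes fewer steps, which is the trade-off at issue.

```python
import numpy as np

def minibatch_sgd(grad_fn, w0, data, batch_size=32, lr=0.01, epochs=10, seed=0):
    """Generic mini-batch SGD: shuffle the data each epoch, then take one
    step per mini-batch using the averaged per-example gradient."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    n = len(data)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = [data[i] for i in order[start:start + batch_size]]
            g = np.mean([grad_fn(w, ex) for ex in batch], axis=0)
            w -= lr * g  # one descent step per mini-batch
    return w
```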


Leveraging Engagement and Participation in e-Learning with Trust

This article describes a project that builds on the authors' previous body of knowledge on Trust and uses it to leverage higher levels of engagement in eLearning contexts. The presented research aims to investigate unobtrusive strategies for evaluating a toolset of Trust indicators that monitor trust levels, thus facilitating the deployment of trust-level regulation interventions. So far results ...




Publication date: 1999